Efficient Approximate Inference for Online Probabilistic Plan Recognition

Author

  • Hung H. Bui

Abstract

We present a new general framework for online probabilistic plan recognition called the Abstract Hidden Markov Memory Model (AHMEM). The new model is an extension of the existing Abstract Hidden Markov Model that allows the policy to have internal memory which can be updated in a Markov fashion. We show that the AHMEM can represent a richer class of probabilistic plans and, at the same time, derive an efficient algorithm for plan recognition in the AHMEM based on the Rao-Blackwellised Particle Filter approximate inference method.

Introduction

The ability to perform plan recognition can be very useful in a wide range of applications such as monitoring and surveillance, decision support, and teamwork. However, the plan-recognizing agent's task is usually complicated by uncertainty in the plan refinement process, in the outcomes of actions, and in the agent's observations of the plan. Dealing with these issues in plan recognition is a challenging task, especially when the recognition has to be done online so that the observer can react to the actor's plan in real time. The uncertainty problem has been addressed by the seminal work of (Charniak & Goldman 1993), which phrases the plan recognition problem as an inference problem in a Bayesian network representing the process of executing the actor's plan. More recent work has considered dynamic models for performing plan recognition online (Pynadath & Wellman 1995; 2000; Goldman, Geib, & Miller 1999; Huber, Durfee, & Wellman 1994; Albrecht, Zukerman, & Nicholson 1998). While this offers a coherent way of modelling and dealing with various sources of uncertainty in the plan execution model, the computational complexity and scalability of inference is the main issue, especially for dynamic models. Inference in dynamic models such as Dynamic Bayesian Networks (DBNs) is more difficult than in a static model: inference in a static network can exploit the sparse structure of the graphical model to remain tractable.
In the dynamic case, the DBN belief state that we need to maintain usually does not preserve the conditional independence properties of the single time-slice network, making exact inference intractable even when the DBN has a sparse structure. Thus, online plan recognition algorithms based on exact inference will run into problems when the belief state becomes too large, and will be unable to scale up to larger or more detailed plan hierarchies. In order to achieve scalability, we need approximation methods that can exploit the special structure of the plan execution process for efficiency. In our previous work (Bui, Venkatesh, & West 2000a; 2000b), we proposed a framework for online plan recognition based on the Abstract Hidden Markov Model (AHMM). The AHMM is a stochastic model for representing the execution of a hierarchy of contingent plans (termed policies). Studying the structure of the AHMM reveals a set of special context-specific independence properties, typical of a stochastic plan execution model, which can be exploited for efficient inference. We derived an approximate inference scheme for the AHMM based on the Rao-Blackwellised Particle Filter (RBPF), and showed that this algorithm scales well w.r.t. the number of levels in the plan hierarchy. While our work on the AHMM is among the first to address the issue of complexity and scalability in online probabilistic plan recognition, the model we considered is somewhat limited in its representational power. Among the limitations of the AHMM is its inability to represent an uninterrupted sequence of plans and actions. This is due to the fact that each policy in the AHMM is purely reactive to the current state and has no memory.

Copyright © 2002, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.
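To make the Rao-Blackwellisation idea concrete, the following is a minimal generic sketch, not the AHMM-specific algorithm of the paper: in a toy two-level switching model (all names and parameters here are illustrative assumptions), each particle samples only the top-level "policy" variable, while the belief over the low-level state is updated exactly by Bayes' rule conditioned on that particle's sampled trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy switching model (parameters invented for illustration):
# z_t -- top-level "policy", sampled per particle;
# x_t -- low-level state, whose posterior given the sampled z-trajectory
#        is tracked exactly (the Rao-Blackwellised part).
N_Z, N_X, N_PART = 2, 3, 200
Z_TRANS = np.array([[0.9, 0.1],
                    [0.2, 0.8]])                      # P(z' | z)
X_TRANS = np.stack([np.full((N_X, N_X), 1.0 / N_X),   # policy 0: random walk
                    np.eye(N_X) * 0.7 + 0.1])         # policy 1: sticky
OBS = np.eye(N_X) * 0.7 + 0.1                         # P(y | x): noisy identity

particles = rng.integers(N_Z, size=N_PART)            # sampled z per particle
beliefs = np.full((N_PART, N_X), 1.0 / N_X)           # exact P(x | z_{1:t}, y_{1:t})
weights = np.full(N_PART, 1.0 / N_PART)

def rbpf_step(y):
    """One filtering step: sample z, update the x-belief exactly, reweight."""
    global particles, beliefs, weights
    particles = np.array([rng.choice(N_Z, p=Z_TRANS[z]) for z in particles])
    for i, z in enumerate(particles):
        pred = beliefs[i] @ X_TRANS[z]                # exact prediction of x
        post = pred * OBS[:, y]                       # exact conditioning on y
        weights[i] *= post.sum()                      # incremental particle weight
        beliefs[i] = post / post.sum()
    weights /= weights.sum()
    if 1.0 / (weights ** 2).sum() < N_PART / 2:       # resample when ESS is low
        idx = rng.choice(N_PART, size=N_PART, p=weights)
        particles, beliefs = particles[idx], beliefs[idx].copy()
        weights[:] = 1.0 / N_PART

for y in [0, 0, 1, 2, 2]:
    rbpf_step(y)
```

Because the low-level belief is integrated out analytically, the particles only have to cover the (much smaller) space of policy trajectories, which is the source of the variance reduction that lets RBPF scale with the depth of the hierarchy.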
This type of memoryless policy cannot represent a sequence of uninterrupted sub-plans, since it has no way of remembering which sub-plan in the sequence is currently being executed. Thus, the decision to choose the next sub-plan depends only on the current state, and not on the sub-plans that have been chosen in the past. Other models for plan recognition, such as the Probabilistic State-Dependent Grammar (PSDG) (Pynadath & Wellman 2000; Pynadath 1999), are more expressive and do not have this limitation. Unfortunately, we have found the existing exact inference method for the PSDG in (Pynadath 1999) to be flawed and inadequate. The main motivation of this paper is to extend the existing AHMM framework to allow for policies with memory, and we propose such an extension of the AHMM.

From: AAAI Technical Report FS-02-05. Compilation copyright © 2002, AAAI (www.aaai.org). All rights reserved.
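The memory limitation above can be illustrated with a small hypothetical contrast (the plan names and states are invented for this sketch): a memoryless policy can only map the current state to a sub-plan, so the same state always triggers the same choice, while a policy carrying one internal memory variable, updated in a Markov fashion, can step through a fixed sequence of sub-plans.

```python
# Hypothetical plan: the three sub-plans must run in this fixed order.
SEQUENCE = ["go_to_door", "open_door", "enter_room"]

def memoryless_policy(state):
    # Purely reactive: the same state always yields the same sub-plan, so a
    # fixed ordering of sub-plans cannot be enforced from the state alone.
    table = {"at_door": "open_door", "door_open": "enter_room"}
    return table.get(state, "go_to_door")

class MemoryPolicy:
    """Policy with internal memory: the index of the current sub-plan.
    The next memory value depends only on the current one (Markov update)."""
    def __init__(self):
        self.mem = 0
    def act(self, state):
        sub_plan = SEQUENCE[self.mem]
        self.mem = min(self.mem + 1, len(SEQUENCE) - 1)   # memory update
        return sub_plan

p = MemoryPolicy()
trace = [p.act("hallway") for _ in range(3)]
```

Here `trace` follows the intended order of `SEQUENCE` even though the observed state never changes, whereas `memoryless_policy("hallway")` returns the same sub-plan on every call.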


Related articles

A General Model for Online Probabilistic Plan Recognition

We present a new general framework for online probabilistic plan recognition called the Abstract Hidden Markov Memory Model (AHMEM). The new model is an extension of the existing Abstract Hidden Markov Model to allow the policy to have internal memory which can be updated in a Markov fashion. We show that the AHMEM can represent a richer class of probabilistic plans, and at the same time derive...

Policy Recognition in the Abstract Hidden Markov Model

In this paper, we present a method for recognising an agent’s behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem online plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent’s plan. Our contributions in this paper are twofold. In terms of...

Hybrid Symbolic-Probabilistic Plan Recognizer : Initial steps

It is important for agents to model other agents’ unobserved plans and goals, based on their observable actions, a process known as plan recognition. Plan recognition often takes the form of matching observations of an agent’s actions to a plan library, a model of possible plans selected by the agent. In this paper, we present efficient algorithms that handle a number of key capabilities implied...

Probabilistic Plan Recognition in Multiagent Systems

We present a theoretical framework for online probabilistic plan recognition in cooperative multiagent systems. Our model extends the Abstract Hidden Markov Model (AHMM) (Bui, Venkatesh, & West 2002), and consists of a hierarchical dynamic Bayes network that allows reasoning about the interaction among multiple cooperating agents. We provide an in-depth analysis of two different policy terminat...

Probabilistic State-Dependent Grammars for Plan Recognition

Techniques for plan recognition under uncertainty require a stochastic model of the plan-generation process. We introduce probabilistic state-dependent grammars (PSDGs) to represent an agent’s plan-generation process. The PSDG language model extends probabilistic context-free grammars (PCFGs) by allowing production probabilities to depend on an explicit model of the planning agent’s internal and ...


Journal title:

Volume   Issue

Pages  -

Publication date: 2002